Clever algorithm. The java code is [available on the net](https://github.com/drovandi/GraphCompressionByBFS).
In 2009, it was almost simultaneously published with [Permuting Web and Social Graphs](https://papers-gamma.link/paper/177) of Boldi et al.
At the time, it outperformed Boldi et al.'s solution in many cases.
The Apostolico-Drovandi paper is mentioned as "the only coordinate-free compression algorithm we are aware of"
in [another](https://papers-gamma.link/paper/105) Boldi et al. paper published somewhat later, in which Boldi et al. report better results.
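For readers unfamiliar with the approach, the core of the Apostolico-Drovandi scheme is a breadth-first traversal that renumbers the vertices; the compression layers built on top of that ordering are then what the paper is about. A minimal sketch of just the BFS renumbering (the function name is mine, not from the paper or the repository):

```python
from collections import deque

def bfs_order(adj):
    """Return a BFS vertex ordering of a graph given as adjacency lists.

    Vertices are listed in the order a breadth-first traversal first
    reaches them; each unreached component is restarted from the
    smallest unvisited vertex.
    """
    n = len(adj)
    order = []
    seen = [False] * n
    for root in range(n):
        if seen[root]:
            continue
        seen[root] = True
        queue = deque([root])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
    return order

# Example: vertex 4 is reached from 0 before 2 and 3 are.
adj = [[1, 4], [0, 2], [1, 3], [2], [0]]
print(bfs_order(adj))  # → [0, 1, 4, 2, 3]
```

The intuition is that a BFS numbering tends to give neighbouring vertices close ids, which keeps the gaps between consecutive successors small and hence cheap to encode.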
Great paper!
### Input graph in main memory
It seems that to compute the suggested ordering (later used to compress the input graph), the graph needs to be stored in the main memory (as an adjacency list). However, if the graph already fits in the main memory of the machine, then compressing it is less interesting.
Some experiments are carried out on huge graphs that do not fit in the main memory of the machine unless compressed. The trick is that the web graphs are already compressed (presumably with the lexicographic URL ordering) when computing the suggested ordering, while the considered social networks are actually not that large and fit in the main memory of a commodity machine.
Footnote 21: "It is possible in principle to avoid keeping the graph in the main memory, but the cost becomes $O(n \log n)$.". How can I do that?
### Heuristic to minimize the average gap cost
For social networks, it is shown that the compression is highly correlated to the average gap cost (average log gaps) if the "intervalisation" of the BV framework is turned off. The authors note that the suggested ordering is excellent at minimizing this average gap cost, even though it does not seem to minimize it directly.
Can a heuristic that is explicitly designed to minimize this average gap cost lead to a better compression?
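To make the quantity concrete, here is a minimal Python sketch of one plausible way to measure the average log-gap cost of an ordering. This is my own proxy, not the exact cost from the paper, which may differ in details such as how the first gap and signs are handled:

```python
import math

def avg_log_gap_cost(adj, order):
    """Average bit cost of successor gaps after renumbering by `order`.

    `order[u]` is the new id of vertex u.  For each vertex, its
    successors are renumbered and sorted; the first gap is taken with
    respect to the source, and each gap is charged ceil(log2(gap + 1))
    bits, a common proxy for the cost of gap encoding.
    """
    total_bits = 0
    total_gaps = 0
    for u, succs in enumerate(adj):
        renum = sorted(order[v] for v in succs)
        prev = order[u]
        for x in renum:
            gap = abs(x - prev)
            total_bits += math.ceil(math.log2(gap + 1))
            prev = x
            total_gaps += 1
    return total_bits / total_gaps if total_gaps else 0.0

# With the identity ordering, vertex 0's successors 1 and 2 each sit
# one id apart, so every gap costs 1 bit.
adj = [[1, 2], [], []]
print(avg_log_gap_cost(adj, [0, 1, 2]))  # → 1.0
```

A heuristic aimed directly at this objective would try to find the permutation `order` minimizing this value, which is what the question above asks about.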
### Typos:
- ref lacking: "label propagation [RAK07, ?]"
- "until it is possible to do so" -> "until it is not possible to do so"
- ref lacking: "Absolute Pott Model (APM) [?]"
- "tecniques"
- "Some simple experiments not reported here shows that the same happen" -> "Some simple experiments not reported here show that the same happens"
An interested reader may also take a look at the [Apostolico-Drovandi](https://www.mdpi.com/1999-4893/2/3/1031/pdf) paper and their [code](https://github.com/drovandi/GraphCompressionByBFS), which often works better than the BV framework, with some exceptions.
Furthermore, in the paper about [Layered Label Propagation](https://papers-gamma.link/paper/105/), Boldi et al. improved on the Gray-code-based order from this paper.